Published 02/11/2013
Chaitan Baru. Photo: Alan Decker.
The San Diego Supercomputer Center (SDSC) at the University of California, San Diego, has named SDSC Distinguished Scientist Chaitanya Baru the Center’s Associate Director, Data Initiatives.
The new position reflects the Center’s focus on addressing both the management and technical aspects of ‘big data’ and other data-enabled applications now becoming pervasive across academia, government, and industry.
Baru, who joined SDSC in 1996, specializes in scientific data management, large-scale data systems, data integration and analytics, and parallel database systems. He is also director of SDSC’s Advanced Cyberinfrastructure Development Group (ACID) and of the Center for Large-scale Data Systems Research (CLDS).
In his new position, Baru will coordinate and expand SDSC’s myriad data-centric initiatives, all aimed at leveraging advances in high-performance computing and data analysis to accelerate research and discovery.
“Chaitan will be the strategic thinker for where SDSC is headed with data-enabled science, which is emerging as a key differentiator for creating value for both academic researchers and commercial enterprises,” said SDSC Director Michael Norman. “This builds on SDSC’s extensive data cyberinfrastructure expertise and adds engagement with industrial sectors that recognize the potential value of ‘big data’, but may require added expertise in building a big data infrastructure on both the hardware and software sides.”
In a recent program solicitation, the National Science Foundation (NSF) described ‘big data’ as “large, diverse, complex, longitudinal, and/or distributed datasets generated from instruments, sensors, Internet transactions, email, video, click streams, and/or all other digital sources available today and in the future.”
Big data applications are characterized by the need to provide timely analytics while dealing with large data volumes, high data rates, and a wide range of data sources. Many of those datasets are so voluminous that most conventional computers and software cannot effectively process them.
“In this era of data-driven science and data-driven enterprises – indeed the data-driven society – vast amounts and diverse types of data are being collected that can now be analyzed and mined for detecting patterns, used for building predictive models, and repurposed and integrated to gain new insights and solve complex problems that we have never been able to address before,” said Baru. “Part of the challenge is also in extracting maximum value from the so-called ‘long tail’ of scientific data, or data that is produced by individual researchers and/or small groups – even as we develop techniques and technologies for dealing with the vast volumes and high velocities of data from what we refer to as the ‘big head’ of science, or the large-scale and high-throughput instruments and outputs from large supercomputer simulations.”
CLDS was established in 2012 as one of SDSC’s centers of excellence, focusing on the technology and technology management aspects of big data. The Center’s current initiatives include development of industry standards for benchmarking big data systems and research into data dynamics, including data growth and data value. Establishment of the center was facilitated by an NSF grant along with sponsorship and support from a number of companies confronting the big data phenomenon, including Seagate, Greenplum, NetApp, Brocade, Mellanox, and Cisco.
BigData Top100 List
The big data benchmarking activity that Baru is coordinating is a community-based effort with broad industry and international participation. CLDS organized the first Workshop on Big Data Benchmarking (WBDB), held in May 2012 in San Jose with NSF and industry sponsorship, followed by a second workshop in December 2012 in Pune, India. The third workshop will be held in July 2013 in Xi’an, China.
“With big data becoming a major force of innovation across enterprises of all sizes, new platforms for managing large data sets are being announced almost weekly, each with ever more features,” added Baru. “Yet there is currently no means of comparing such platforms.”
Later this month, Baru and his CLDS partners will announce details of a new community-based effort, the BigData Top100 List, to define an end-to-end, application-layer benchmark for big data applications whose specification can be readily adapted to evolving challenges in the big data space. The group is currently seeking community input on this process.
“SDSC has a long-standing, well-established reputation as a data-oriented, high-performance computing (HPC) center, and this list is one more example of the leadership role that SDSC is accustomed to playing,” noted Baru about the benchmarking effort. “Going forward, we expect to play a major role not only in data science research and development, but also increasingly in education at all levels, especially focused on the practitioner community.”
About SDSC
As an Organized Research Unit of UC San Diego, SDSC is considered a leader in data-intensive computing and all aspects of ‘big data’, including data integration, performance modeling, data mining, software development, workflow automation, and more. SDSC supports hundreds of multidisciplinary programs spanning a wide variety of domains, from earth sciences and biology to astrophysics, bioinformatics, and health IT. With its two newest supercomputer systems, Trestles and Gordon, SDSC is a partner in XSEDE (Extreme Science and Engineering Discovery Environment), the most advanced collection of integrated digital resources and services in the world.
Media Contacts:
Jan Zverina, SDSC Communications
858 534-5111 or jzverina@sdsc.edu
Warren R. Froelich, SDSC Communications
858 822-3622 or froelich@sdsc.edu
San Diego Supercomputer Center: http://www.sdsc.edu/
UC San Diego: http://www.ucsd.edu/
CLDS: http://clds.sdsc.edu
WBDB: http://clds.sdsc.edu/bdbc/workshops